
    Deep Dynamic Factor Models

    We propose a novel deep neural net framework, which we refer to as the Deep Dynamic Factor Model (D2FM), to encode the information available in hundreds of macroeconomic and financial time series into a handful of unobserved latent states. While similar in spirit to traditional dynamic factor models (DFMs), this new class of models allows for nonlinearities between factors and observables thanks to its deep neural net structure. By design, however, the latent states of the model can still be interpreted as in a standard factor model. In an empirical application to forecasting and nowcasting economic conditions in the US, we show the potential of this framework in dealing with high-dimensional, mixed-frequency and asynchronously published time-series data. In a fully real-time out-of-sample exercise with US data, the D2FM improves on the performance of a state-of-the-art DFM.
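    The core architectural idea, an encoder that compresses many observed series into a few interpretable latent factors, can be sketched as a minimal autoencoder-style forward pass. This is an illustrative toy (random weights, hypothetical dimensions), not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 observed series encoded into 3 latent factors.
n_series, n_factors, hidden = 100, 3, 16

# Randomly initialised weights stand in for a trained model.
W1 = rng.normal(scale=0.1, size=(n_series, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_factors))
D = rng.normal(scale=0.1, size=(n_factors, n_series))

def encode(x):
    """Nonlinear encoder: observables -> handful of latent states."""
    return np.tanh(x @ W1) @ W2

def decode(f):
    """Linear decoder, so factors load on observables as in a standard DFM."""
    return f @ D

x_t = rng.normal(size=(1, n_series))   # one cross-section of standardised series
f_t = encode(x_t)                      # latent factors
x_hat = decode(f_t)                    # reconstruction of the observables
print(f_t.shape, x_hat.shape)          # → (1, 3) (1, 100)
```

    Keeping the decoder linear is what preserves the factor-model interpretation of the latent states while the encoder captures the nonlinearities.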

    Enhancing glomeruli segmentation through cross-species pre-training

    The importance of kidney biopsy, a medical procedure in which a small tissue sample is extracted from the kidney for examination, is increasing due to the rising incidence of kidney disorders. This procedure helps diagnose several kidney diseases that cause changes in kidney function, guides treatment decisions, and allows the evaluation of potential donor kidneys for transplantation. In this work, a deep learning system for the automatic segmentation of glomeruli in kidney biopsy images is presented. A novel cross-species transfer learning approach, in which a semantic segmentation network is trained on mouse kidney tissue images and then fine-tuned on human data, is proposed to boost segmentation performance. Experiments conducted with two deep semantic segmentation networks, MobileNet and SegNeXt, demonstrated the effectiveness of the cross-species pre-training approach, leading to an increased generalization ability of both models.

    A multi-stage GAN for multi-organ chest X-ray image generation and segmentation

    Multi-organ segmentation of X-ray images is of fundamental importance for computer-aided diagnosis systems. However, the most advanced semantic segmentation methods rely on deep learning and require a huge number of labeled images, which are rarely available because of both the high cost of human resources and the time required for labeling. In this paper, we present a novel multi-stage generation algorithm based on Generative Adversarial Networks (GANs) that can produce synthetic images along with their semantic labels and can be used for data augmentation. The main feature of the method is that, unlike other approaches, generation occurs in several stages, which simplifies the procedure and allows it to be used on very small datasets. The method has been evaluated on the segmentation of chest radiographic images, showing promising results. The multi-stage approach achieves state-of-the-art results and, when very few images are available to train the GANs, outperforms the corresponding single-stage approach.
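    One common way to stage such a pipeline is to first generate the semantic label map and then generate the image conditioned on it, so that each synthetic image is born paired with its ground truth. The sketch below is purely structural: the two "generators" are hypothetical toy functions, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32  # toy image size

def stage1_label_generator(z):
    """Toy stand-in: map a noise vector to a {0,1,2} label map (e.g. bg/lung/heart)."""
    field = z.reshape(4, 4)
    up = np.kron(field, np.ones((H // 4, W // 4)))   # upsample 4x4 -> 32x32
    return np.digitize(up, bins=[-0.5, 0.5])          # three classes

def stage2_image_generator(label_map, z):
    """Toy stand-in: render an intensity image conditioned on the label map."""
    base = label_map / 2.0                            # class-dependent intensity
    texture = 0.05 * z.reshape(H, W)                  # noise-driven texture
    return np.clip(base + texture, 0.0, 1.0)

z1 = rng.normal(size=16)
z2 = rng.normal(size=H * W)
labels = stage1_label_generator(z1)                   # stage 1: semantic labels
image = stage2_image_generator(labels, z2)            # stage 2: conditioned image
print(labels.shape, image.shape)                      # → (32, 32) (32, 32)
```

    Because the label map is an input to the second stage, the pair (image, labels) can be fed straight into a segmentation network as augmentation data.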

    Analysis of brain NMR images for age estimation with deep learning

    During the last decade, deep learning and Convolutional Neural Networks (CNNs) have had a profound impact on computer vision, yielding exceptional results on a variety of problems, including the analysis of medical images. Recently, these techniques have been extended to 3D images, with the downside of a large increase in the computational load. In particular, state-of-the-art CNNs have been used for brain Nuclear Magnetic Resonance (NMR) imaging, with the aim of estimating the patients' age. A large discrepancy between the real and the estimated age is a clear warning sign for the onset of neurodegenerative diseases, such as some types of early dementia and Alzheimer's disease. In this paper, we propose an effective alternative to 3D convolutions that guarantees a significant reduction of the computational requirements for this kind of analysis. The proposed architectures achieve results comparable with the competing 3D methods, requiring only a fraction of the training time and GPU memory.
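    To see why replacing 3D convolutions pays off, it helps to compare parameter counts. One well-known family of alternatives factorizes each k×k×k kernel into a 2D spatial convolution followed by a 1D convolution along the third axis; the arithmetic below is illustrative only and does not reproduce the paper's exact architecture:

```python
# Illustrative parameter-count comparison for one convolutional layer.
c_in, c_out, k = 32, 64, 3

# Full 3D convolution: one k x k x k kernel per (input, output) channel pair.
params_3d = c_in * c_out * k * k * k

# Factorised alternative: a k x k spatial conv followed by a length-k 1D conv
# along the remaining axis (one simple variant; others insert a mid channel).
params_factorised = c_in * c_out * k * k + c_out * c_out * k

print(params_3d, params_factorised)   # → 55296 30720
```

    The saving grows with kernel size, and the activation memory saved during training follows the same pattern, which is where the reduced GPU-memory footprint comes from.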

    Predicting zinc binding at the proteome level

    BACKGROUND: Metalloproteins are proteins capable of binding one or more metal ions, which may be required for their biological function, for the regulation of their activities, or for structural purposes. Metal-binding properties remain difficult to predict, as well as to investigate experimentally, at the whole-proteome level. Consequently, the current knowledge about metalloproteins is only partial. RESULTS: The present work reports on the development of a machine learning method for the prediction of the zinc-binding state of pairs of nearby amino acids, using predictors based on support vector machines. The predictor was trained using chains containing zinc-binding sites and non-metalloproteins in order to provide positive and negative examples. Results based on strong non-redundancy tests show that (1) zinc-binding residues can be predicted and (2) modelling the correlation between the binding states of nearby residues significantly improves performance. The trained predictor was then applied to the human proteome. The results were in good agreement with the outcomes of previous, highly manually curated efforts for the identification of human zinc-binding proteins. Some previously unknown zinc-binding sites could be identified and were further validated through structural modelling. The software implementing the predictor is freely available. CONCLUSION: The proposed approach constitutes a highly automated tool for the identification of metalloproteins, providing results of comparable quality to highly manually refined predictions. The ability to model correlations between pairwise residues yields a significant improvement over standard 1D-based approaches. In addition, the method permits the identification of previously unknown metal sites, providing important hints for the work of experimentalists.
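    The key modelling choice, classifying a *pair* of nearby residues jointly rather than each residue independently, amounts to training an SVM on the concatenated features of the pair. The sketch below uses random toy features (the actual predictor uses sequence-profile features, which are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors for pairs of nearby residues: the first two
# components play the role of each residue's individual binding signal.
n_pairs, dim = 400, 10
X = rng.normal(size=(n_pairs, dim))

# The label depends on BOTH residues at once - exactly the correlation a
# per-residue (1D) classifier cannot exploit.
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

# One SVM over the joint pair features models that correlation directly.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
acc = clf.score(X, y)
print(round(acc, 2))
```

    A per-residue baseline seeing only `X[:, 0]` or `X[:, 1]` in isolation cannot separate these labels well, which mirrors the paper's finding that modelling pairwise correlation improves performance.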

    HIV-1 tat protein enters dysfunctional endothelial cells via integrins and renders them permissive to virus replication

    Previous work has shown that the Tat protein of Human Immunodeficiency Virus (HIV)-1 is released by acutely infected cells in a biologically active form and enters dendritic cells upon the binding of its arginine-glycine-aspartic acid (RGD) domain to the α5β1, αvβ3, and αvβ5 integrins. The up-regulation/activation of these integrins occurs in endothelial cells exposed to inflammatory cytokines, which are increased in HIV-infected individuals, leading to endothelial cell dysfunction. Here, we show that inflammatory cytokine-activated endothelial cells selectively bind and rapidly take up nano- to micromolar concentrations of Tat, as determined by flow cytometry. Protein oxidation and low temperatures reduce Tat entry, suggesting a conformation- and energy-dependent process. Consistently, Tat entry is competed out by RGD-Tat peptides or integrin natural ligands, and it is blocked by anti-α5β1, -αvβ3, and -αvβ5 antibodies. Moreover, modelling-docking calculations identify a low-energy Tat-αvβ3 integrin complex in which Tat makes contacts with both the αv and β3 chains. Notably, internalized Tat induces HIV replication in inflammatory cytokine-treated, but not untreated, endothelial cells. Thus, endothelial cell dysfunction driven by inflammatory cytokines renders the vascular system a target of Tat, which makes endothelial cells permissive to HIV replication, adding a further layer of complexity to efforts to functionally cure and/or eradicate HIV infection.

    Coronary atherosclerosis in outlier subjects at the opposite extremes of traditional risk factors: Rationale and preliminary results of the Coronary Atherosclerosis in outlier subjects: Protective and novel Individual Risk factors Evaluation (CAPIRE) study

    Although it is generally accepted that cardiac ischemic events develop when coronary atherosclerosis (coronary artery disease [CAD]) has reached a critical threshold, this is true only to a first approximation. Indeed, there are patients with severe CAD who do not develop ischemic events; conversely, at the other extreme, individuals with minimal CAD may develop them. Similar exceptions to this paradigm include patients with diffuse CAD despite a low risk factor (RF) profile and others with multiple RFs who develop only mild or no CAD. The CAPIRE project was therefore designed to investigate whether the specific study of these extreme outlier populations could provide clues for the identification of yet unknown risk or protective factors for CAD and ischemic events. In the CAPIRE study, 481 subjects without previous symptoms or history of ischemic heart disease and with normal left ventricular systolic function undergoing coronary computed tomography angiography were selected based on coronary computed tomography angiography findings and cardiovascular RF profile. Within the whole population, 2 extreme outlier groups were identified: (1) subjects with no CAD despite multiple RFs and (2), at the opposite extreme, subjects with diffuse CAD despite a low-risk profile. Each subject was characterized by clinical and anatomical imaging variables of CAD and baseline circulating biomarkers. Blood samples were collected and stored in a biological bank for further advanced investigations. The project is designed as a prospective, observational, international multicenter study with an initial cross-sectional analysis of clinical, imaging, and biomolecular variables in the selected groups and a longitudinal 5-year follow-up.

    Impact of coronary calcification assessed by coronary CT angiography on treatment decision in patients with three-vessel CAD:insights from SYNTAX III trial

    OBJECTIVES: The aim of this study was to determine SYNTAX scores based on coronary computed tomography angiography (CCTA) and invasive coronary angiography (ICA), to assess whether heavy coronary calcification significantly limits the CCTA evaluation, and to assess the impact of severe calcification on the heart team's treatment decision and procedural planning in patients with three-vessel coronary artery disease (CAD) with or without left main disease. METHODS: SYNTAX III was a multicentre, international study that included patients with three-vessel CAD with or without left main disease. The heart teams were randomized to assess coronary arteries with either CCTA or ICA. We stratified the patients based on the presence of at least 1 lesion with heavy calcification, defined as an arc of calcium >180° within the lesion on CCTA. Agreement on the anatomical SYNTAX score and treatment decision was compared between patients with and without heavy calcifications. RESULTS: Overall, 222 patients with available CCTA and ICA were included in this trial subanalysis (104 with heavy calcification, 118 without heavy calcification). The mean difference in the anatomical SYNTAX score (CCTA-derived minus ICA-derived) was lower in patients without heavy calcifications [mean (−1.96 SD; +1.96 SD) = 1.5 (−19.3; +22.4) vs 5.9 (−17.5; +29.3), P = 0.004]. The agreement on the treatment decision did not differ between patients with (Cohen's kappa 0.79) or without coronary calcifications (Cohen's kappa 0.84). The agreement on treatment planning did not differ between patients with (concordance 80.3%) or without coronary calcifications (concordance 82.8%). CONCLUSIONS: An overall good correlation between CCTA- and ICA-derived SYNTAX scores was found. The presence of heavy coronary calcification moderately influenced the agreement between CCTA and ICA on the anatomical SYNTAX score. However, agreement on the treatment decision and planning was high, irrespective of the presence of calcified lesions.
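    The agreement statistic reported here, Cohen's kappa, measures concordance between the two modality-based decisions beyond what chance would produce. A minimal worked example with hypothetical decision vectors (the study's per-patient data are not reproduced here):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical heart-team decisions (0 = PCI, 1 = CABG) reached from CCTA
# vs ICA for the same ten patients - illustrative data only.
decision_ccta = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
decision_ica  = [0, 1, 1, 0, 1, 0, 1, 1, 1, 0]

# Raw agreement is 9/10 = 0.9; expected chance agreement from the marginals
# is 0.5; kappa = (0.9 - 0.5) / (1 - 0.5) = 0.8.
kappa = cohen_kappa_score(decision_ccta, decision_ica)
print(round(kappa, 2))  # → 0.8
```

    Kappa values near 0.8, like those reported in the study (0.79 and 0.84), are conventionally read as substantial to almost-perfect agreement.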